The success of neural networks (NNs) in a wide range of applications has led to increased interest in understanding the underlying learning dynamics of these models. In this paper, we go beyond merely describing learning dynamics by adopting a graph perspective and investigating the relationship between the graph structure of NNs and their performance. Specifically, we propose to (1) represent the neural network learning process as a time-evolving graph (i.e., a series of static graph snapshots over training epochs), (2) capture the structural changes of the NN during training in simple temporal summaries, and (3) leverage these structural summaries to predict the accuracy of the underlying NN on classification or regression tasks. For the dynamic graph representation of NNs, we explore structural representations of fully connected and convolutional layers, the key components of powerful NN models. Our analysis shows that simple summaries of graph statistics, such as weighted degree and eigenvector centrality, suffice on their own to accurately predict NN performance. For example, a weighted-degree-based summary constructed from five training epochs of the LeNet architecture achieves a classification accuracy of over 93%. Our findings hold across different NN architectures, including LeNet, VGG, AlexNet, and ResNet.
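The abstract leaves the exact summary pipeline open; the following minimal sketch (NumPy, with all shapes, statistics, and names being illustrative assumptions) shows how per-epoch weight snapshots could be turned into a fixed-length weighted-degree descriptor for a downstream accuracy predictor:

import numpy as np

def weighted_degree_summary(weight_matrices):
    """Summarize one graph snapshot of a fully connected network.

    Each layer's weight matrix defines a bipartite graph between
    consecutive neuron layers; a neuron's weighted degree is the sum
    of the absolute weights on its incident edges.
    """
    summary = []
    for W in weight_matrices:               # W: (n_out, n_in)
        degree_in = np.abs(W).sum(axis=0)   # degrees of input-side neurons
        degree_out = np.abs(W).sum(axis=1)  # degrees of output-side neurons
        degrees = np.concatenate([degree_in, degree_out])
        # Compress the per-snapshot degrees into a fixed-size descriptor.
        summary.extend([degrees.mean(), degrees.std(),
                        np.percentile(degrees, 25),
                        np.percentile(degrees, 75)])
    return np.array(summary)

# One snapshot per epoch -> a short multivariate time series.
rng = np.random.default_rng(0)
snapshots = [[rng.normal(size=(16, 8)), rng.normal(size=(4, 16))]
             for _ in range(5)]             # stand-ins for 5 epochs of weights
features = np.concatenate([weighted_degree_summary(s) for s in snapshots])
print(features.shape)  # fixed-length input for an accuracy predictor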
Generalization in reinforcement learning (RL) is critical for the practical deployment of RL algorithms. Various schemes have been proposed to address the generalization problem, including transfer learning, multi-task learning, and meta-learning, as well as robust and adversarial reinforcement learning. However, there is no unified formulation of these schemes, nor a comprehensive comparison of methods across them. In this work, we propose a game-theoretic framework for generalization in reinforcement learning, named GiRL, in which an RL agent is trained against an adversary over a set of tasks, and the adversary can manipulate the distribution over tasks within a given threshold. With different configurations, GiRL reduces to the schemes mentioned above. To solve GiRL, we adapt a widely used method in game theory, Policy Space Response Oracle (PSRO), with the following three important modifications: i) we use Model-Agnostic Meta-Learning (MAML) as the best-response oracle, ii) we propose a modified projected replicated dynamics, i.e., R-PRD, which ensures that the computed meta-strategy of the adversary stays within the threshold, and iii) we also propose a protocol for few-shot learning over multiple strategies during testing. Extensive experiments on MuJoCo environments demonstrate that our proposed method can outperform existing baselines such as MAML.
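As a toy illustration of the projected-replicator idea, here is a hedged sketch of replicator dynamics on the adversary's task distribution with a crude threshold projection; the L1-ball constraint, step sizes, and payoffs are assumptions, not the paper's actual R-PRD:

import numpy as np

def replicator_step(x, payoffs, lr=0.1):
    # One discrete replicator-dynamics update for a mixed strategy x.
    advantage = payoffs - x @ payoffs
    return x + lr * x * advantage

def project_to_ball(x, ref, eps):
    # Crude projection: pull x back toward a reference distribution ref
    # until it lies within an L1 ball of radius eps, then renormalize.
    gap = np.abs(x - ref).sum()
    if gap > eps:
        x = ref + (x - ref) * (eps / gap)
    x = np.clip(x, 0.0, None)
    return x / x.sum()

# Toy adversary over 3 tasks: payoffs it receives for emphasizing each task.
ref = np.ones(3) / 3              # e.g., the original task distribution
x = ref.copy()
payoffs = np.array([1.0, 0.2, 0.5])
for _ in range(50):
    x = project_to_ball(replicator_step(x, payoffs), ref, eps=0.4)
print(x)  # adversarial task distribution, constrained near the reference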
Getting up from an arbitrary fallen state is a basic human skill. Existing methods for learning this skill often generate highly dynamic and erratic get-up motions that do not resemble human get-up strategies, or rely on tracking recorded human get-up motions. In this paper, we present a staged approach using reinforcement learning, without recourse to motion capture data. The method first leverages a strong character model, which facilitates the discovery of solution modes. A second stage then learns to adapt the control policy to work with progressively weaker versions of the character. Finally, a third stage learns control policies that can reproduce the weaker get-up motions at much slower speeds. We show that across multiple runs, the method can discover a diverse variety of get-up strategies and execute them at a variety of speeds. The results typically produce policies that use a final stand-up strategy common to the recovery motions seen from all initial states. However, we also find policies that adopt different strategies for prone and supine initial fallen states. The learned get-up control policies often have significant static stability, i.e., they can be paused at a variety of points during the get-up motion. We further test our method on novel constrained scenarios, such as having a leg and an arm in a cast.
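A rough sketch of how such a three-stage curriculum could be scheduled follows; the RL inner loop is stubbed out, and all stage lengths and decay schedules are illustrative assumptions rather than the paper's settings:

def train_stage(policy, strength_scale, speed_scale, steps):
    # Stand-in for an RL inner loop (e.g., PPO) on the get-up task;
    # strength_scale multiplies joint torque limits, speed_scale the
    # target playback speed of the motion being reproduced.
    print(f"train {steps} steps @ strength={strength_scale}, speed={speed_scale}")

policy = None  # placeholder for the control policy being trained

# Stage 1: a strong character makes the task easy enough to discover
# get-up solution modes at all.
train_stage(policy, strength_scale=1.0, speed_scale=1.0, steps=10_000)

# Stage 2: retrain while progressively weakening the character, so the
# discovered strategy transfers to realistic strength levels.
for strength in (0.9, 0.8, 0.7, 0.6, 0.5):
    train_stage(policy, strength_scale=strength, speed_scale=1.0, steps=5_000)

# Stage 3: learn to reproduce the weak-character motion at slower speeds,
# which tends to yield statically stable get-up trajectories.
for speed in (0.75, 0.5):
    train_stage(policy, strength_scale=0.5, speed_scale=speed, steps=5_000)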
Cross-device federated learning (FL) is a distributed learning paradigm with several challenges that differentiate it from traditional distributed learning: variability in the system characteristics of each device, and millions of clients coordinating with a central server being primary ones. Most FL systems described in the literature are synchronous: they perform a synchronized aggregation of model updates from individual clients. Scaling synchronous FL is challenging, since increasing the number of clients training in parallel leads to diminishing returns in training speed, analogous to large-batch training. Moreover, stragglers hinder synchronous FL training. In this work, we outline a production asynchronous FL system design. Our work tackles the aforementioned issues, sketches some of the system design challenges and their solutions, and touches upon principles for building a production FL system for millions of clients. Empirically, we demonstrate that asynchronous FL converges faster than synchronous FL when training across nearly one hundred million devices. In particular, in high-concurrency settings, asynchronous FL is 5x faster and incurs substantially less communication overhead than synchronous FL.
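To make the synchronous/asynchronous contrast concrete, here is a minimal sketch of an asynchronous server that applies each client delta on arrival, with a staleness-based down-weighting that is a common choice in the async-FL literature rather than necessarily this paper's rule:

import numpy as np

def staleness_weight(staleness, a=0.5):
    # Down-weight stale updates; this polynomial form is one common
    # choice in async-FL work, assumed here for illustration.
    return (1.0 + staleness) ** -a

class AsyncServer:
    """Applies each client delta as it arrives instead of waiting for a
    synchronized round, so stragglers never block training."""
    def __init__(self, model):
        self.model = model
        self.version = 0

    def on_client_update(self, delta, client_version, lr=1.0):
        staleness = self.version - client_version  # server steps since pull
        self.model += lr * staleness_weight(staleness) * delta
        self.version += 1

server = AsyncServer(np.zeros(4))
server.on_client_update(np.ones(4), client_version=0)  # fresh update
server.on_client_update(np.ones(4), client_version=0)  # staleness 1, damped
print(server.model, server.version)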
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
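A toy PyTorch sketch of the core idea follows, assuming each camera and LiDAR token carries a representative 3D coordinate whose embedding is simply added to the token; the layer sizes, the two-layer position MLP, and the query count are assumptions, not CMT's exact design:

import torch
import torch.nn as nn

class CrossModalTokens(nn.Module):
    """Implicit alignment: embed 3D coordinates and add them to both
    image and point-cloud tokens, then decode with object queries."""
    def __init__(self, dim=256):
        super().__init__()
        self.pos_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(),
                                     nn.Linear(dim, dim))
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2)
        self.queries = nn.Parameter(torch.randn(100, dim))  # object queries

    def forward(self, img_tokens, img_xyz, pts_tokens, pts_xyz):
        # No explicit view transformation: shared 3D position encoding
        # aligns the two modalities in one token sequence.
        tokens = torch.cat([img_tokens + self.pos_mlp(img_xyz),
                            pts_tokens + self.pos_mlp(pts_xyz)], dim=1)
        q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        return self.decoder(q, tokens)  # per-query features -> box heads

m = CrossModalTokens()
out = m(torch.randn(2, 50, 256), torch.randn(2, 50, 3),
        torch.randn(2, 80, 256), torch.randn(2, 80, 3))
print(out.shape)  # torch.Size([2, 100, 256])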
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
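A minimal sketch of the NAIVEATTACK-style poisoning step, with patch size, location, and data shapes as illustrative assumptions:

import numpy as np

def add_trigger(images, trigger):
    # Stamp a fixed patch into the bottom-right corner of each image.
    poisoned = images.copy()
    th, tw = trigger.shape[:2]
    poisoned[:, -th:, -tw:, :] = trigger
    return poisoned

rng = np.random.default_rng(0)
raw = rng.random((100, 32, 32, 3), dtype=np.float32)  # stand-in image batch
trigger = np.ones((3, 3, 3), dtype=np.float32)        # white 3x3 patch
poisoned = add_trigger(raw, trigger)
# Distillation then runs on clean images plus (poisoned, target-label)
# pairs, so the backdoor is baked into the synthetic dataset itself.
# DOORPING would additionally re-optimize the trigger at every
# distillation step instead of keeping it fixed.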
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while a model that is not pre-trained (Transformer) shows no such ability beyond naive repetition. Evaluating generated music is a challenging task; evaluating drum grooves, for which there is little precedent in the literature, even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
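The abstract does not specify the tokenization; purely for illustration, a hypothetical text encoding of quantized drum hits that a causal language model could be fine-tuned on might look like this:

# Hypothetical drum-MIDI-to-text encoding; names, grid resolution, and
# pitch map are assumptions, not the paper's actual scheme.
DRUMS = {36: "KICK", 38: "SNARE", 42: "HH"}

def groove_to_text(events, steps_per_bar=16):
    # events: list of (step, midi_pitch) drum hits on a quantized grid.
    # Produces a flat token string suitable for text-LM finetuning.
    tokens = []
    for step, pitch in sorted(events):
        tokens.append(f"t{step % steps_per_bar}:{DRUMS.get(pitch, 'PERC')}")
    return " ".join(tokens)

groove = [(0, 36), (4, 38), (8, 36), (12, 38)] + \
         [(s, 42) for s in range(0, 16, 2)]
print(groove_to_text(groove))
# -> "t0:KICK t0:HH t2:HH t4:SNARE t4:HH ..." fed to the LM as plain text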
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: First, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
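A hedged sketch of what a mask-based dynamic weighting step could look like in PyTorch; the masked-average class center and cosine re-weighting are assumed forms, not necessarily RefT's exact module:

import torch
import torch.nn.functional as F

def reweight_query(query_feats, support_feats, support_mask):
    # query_feats:   (B, C, H, W)
    # support_feats: (B, C, H, W)
    # support_mask:  (B, 1, H, W), binary
    # Pool a class center from support features under the support mask,
    # then modulate query features by similarity to that center.
    denom = support_mask.sum(dim=(2, 3)).clamp(min=1.0)              # (B, 1)
    center = (support_feats * support_mask).sum(dim=(2, 3)) / denom  # (B, C)
    sim = F.cosine_similarity(query_feats, center[..., None, None], dim=1)
    return query_feats * (1 + sim.unsqueeze(1))  # feature-level enhancement

q = torch.randn(2, 64, 32, 32)
s = torch.randn(2, 64, 32, 32)
m = (torch.rand(2, 1, 32, 32) > 0.5).float()
print(reweight_query(q, s, m).shape)  # torch.Size([2, 64, 32, 32])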
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
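For concreteness, a sketch of a distillation objective with a fairness penalty attached; the KL term is standard knowledge distillation, while the group-parity term is a generic stand-in for debiasing, not RELIANT's actual mechanism:

import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, group, lam=0.5, T=2.0):
    # Standard temperature-scaled KD term: student mimics the teacher.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    # Assumed fairness proxy: penalize the gap between group-wise mean
    # predictions of the student (a demographic-parity-style term).
    p = F.softmax(student_logits, dim=1)
    parity = (p[group == 0].mean(0) - p[group == 1].mean(0)).abs().sum()
    return kd + lam * parity

s = torch.randn(8, 3, requires_grad=True)   # student logits for 8 nodes
t = torch.randn(8, 3)                       # frozen teacher logits
g = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1])  # sensitive-group labels
fair_kd_loss(s, t, g).backward()            # trainable end to end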
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity between the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even when the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern Inverted Residual Mobile Block (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase Efficient MOdel (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, e.g., our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing SoTA CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
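One plausible reading of the iRMB recipe as a PyTorch module, combining an inverted residual with self-attention for long-distance interactions and a depthwise convolution for local mixing; all layer sizes and the exact wiring are assumptions, not the paper's block:

import torch
import torch.nn as nn

class iRMBSketch(nn.Module):
    """Illustrative inverted-residual block that mixes tokens with
    multi-head self-attention (long-distance) and a depthwise conv
    (short-distance), then projects back with a residual connection."""
    def __init__(self, dim=64, expand=4, heads=4):
        super().__init__()
        hidden = dim * expand
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.expand = nn.Conv2d(dim, hidden, 1)
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.proj = nn.Conv2d(hidden, dim, 1)
        self.act = nn.GELU()

    def forward(self, x):                                # x: (B, C, H, W)
        B, C, H, W = x.shape
        t = self.norm(x.flatten(2).transpose(1, 2))      # (B, HW, C)
        t, _ = self.attn(t, t, t)                        # long-distance mixing
        a = t.transpose(1, 2).reshape(B, C, H, W)
        y = self.proj(self.act(self.dw(self.expand(x + a))))  # local mixing
        return x + y                                     # MobileNetv2-style residual

blk = iRMBSketch()
print(blk(torch.randn(2, 64, 16, 16)).shape)  # torch.Size([2, 64, 16, 16])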